41 research outputs found

    $\ell^1$-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?

    This paper investigates the problem of signal estimation from undersampled noisy sub-Gaussian measurements under the assumption of a cosparse model. Based on generalized notions of sparsity, we derive novel recovery guarantees for the $\ell^1$-analysis basis pursuit, enabling highly accurate predictions of its sample complexity. The corresponding bounds on the number of required measurements explicitly depend on the Gram matrix of the analysis operator and therefore particularly account for its mutual coherence structure. Our findings defy the conventional wisdom that promotes the sparsity of analysis coefficients as the crucial quantity to study. In fact, this common paradigm breaks down completely in many situations of practical interest, for instance, when applying a redundant (multilevel) frame as analysis prior. Through extensive numerical experiments, we demonstrate that, in contrast, our theoretical sampling-rate bounds reliably capture the recovery capability of various examples, such as redundant Haar wavelet systems, total variation, or random frames. The proofs of our main results build upon recent achievements in the convex geometry of data mining problems. More precisely, we establish a sophisticated upper bound on the conic Gaussian mean width that is associated with the underlying $\ell^1$-analysis polytope. Due to a novel localization argument, it turns out that the presented framework naturally extends to stable recovery, allowing us to incorporate compressible coefficient sequences as well.
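
    A minimal sketch of the $\ell^1$-analysis basis pursuit studied above, applied to a synthetic cosparse signal: the random redundant frame, the dimensions, and the cvxpy-based solver below are illustrative assumptions rather than the paper's actual setup.

```python
# Hedged sketch: solve   min ||Omega x||_1   s.t.   ||A x - y||_2 <= eta
# for a signal that is cosparse with respect to a random redundant frame Omega.
# All operators and dimensions are assumptions for illustration only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, p = 128, 80, 200                        # signal dim, measurements, frame size (assumed)
Omega = rng.standard_normal((p, n))           # stand-in for a redundant analysis operator

# Cosparse model: x lies in the null space of a large subset of the rows of Omega.
cosupport = rng.choice(p, size=n - 6, replace=False)
_, _, Vt = np.linalg.svd(Omega[cosupport], full_matrices=True)
x_true = Vt[n - 6:].T @ rng.standard_normal(6)     # generic 6-dimensional null space

A = rng.standard_normal((m, n)) / np.sqrt(m)       # sub-Gaussian (here Gaussian) measurements
noise = 0.01 * rng.standard_normal(m)
y = A @ x_true + noise

x = cp.Variable(n)
eta = 1.1 * np.linalg.norm(noise)                  # assumed bound on the noise level
problem = cp.Problem(cp.Minimize(cp.norm(Omega @ x, 1)),
                     [cp.norm(A @ x - y, 2) <= eta])
problem.solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```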

    Compressed Sensing with 1D Total Variation: Breaking Sample Complexity Barriers via Non-Uniform Recovery (iTWIST'20)

    This paper investigates total variation minimization in one spatial dimension for the recovery of gradient-sparse signals from undersampled Gaussian measurements. Recently established bounds for the required sampling rate state that uniform recovery of all $s$-gradient-sparse signals in $\mathbb{R}^n$ is only possible with $m \gtrsim \sqrt{sn} \cdot \text{PolyLog}(n)$ measurements. Such a condition is especially prohibitive for high-dimensional problems, where $s$ is much smaller than $n$. However, previous empirical findings seem to indicate that this sampling rate does not reflect the typical behavior of total variation minimization. Indeed, this work provides a rigorous analysis that breaks the $\sqrt{sn}$-bottleneck for a large class of natural signals. The main result shows that non-uniform recovery succeeds with high probability from $m \gtrsim s \cdot \text{PolyLog}(n)$ measurements if the jump discontinuities of the signal vector are sufficiently well separated. In particular, this guarantee covers signals arising from a discretization of piecewise constant functions defined on an interval. The present paper serves as a short summary of the main results in our recent work [arXiv:2001.09952].
    Comment: in Proceedings of iTWIST'20, Paper-ID: 32, Nantes, France, December 2-4, 2020. arXiv admin note: substantial text overlap with arXiv:2001.09952
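
    The sketch below illustrates the 1D total variation minimization program discussed above for a gradient-sparse signal with well-separated jumps; the signal model, the measurement count, and the cvxpy solver are illustrative assumptions, not the paper's experiments.

```python
# Hedged sketch: 1D total variation minimization
#   min ||D x||_1   s.t.   A x = y,
# with D the finite-difference operator and A an i.i.d. Gaussian measurement matrix.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, s = 256, 4                                            # length and number of jumps (assumed)
jump_positions = rng.choice(np.arange(20, n - 20, 40), size=s, replace=False)  # well separated
x_true = np.zeros(n)
for j in np.sort(jump_positions):
    x_true[j:] += rng.standard_normal()                  # piecewise-constant signal with s jumps

m = 50                                                   # on the order of s * polylog(n) (assumed)
A = rng.standard_normal((m, n))
y = A @ x_true

D = np.diff(np.eye(n), axis=0)                           # (n-1) x n finite-difference matrix
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(D @ x, 1)), [A @ x == y]).solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```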

    Let's Enhance: A Deep Learning Approach to Extreme Deblurring of Text Images

    This work presents a novel deep-learning-based pipeline for the inverse problem of image deblurring, leveraging augmentation and pre-training with synthetic data. Our results build on our winning submission to the recent Helsinki Deblur Challenge 2021, whose goal was to explore the limits of state-of-the-art deblurring algorithms in a real-world data setting. The task of the challenge was to deblur out-of-focus images of random text, thereby maximizing an optical-character-recognition-based score function in a downstream task. A key step of our solution is the data-driven estimation of the physical forward model describing the blur process. This enables a stream of synthetic data, generating pairs of ground-truth and blurry images on the fly, which is used for an extensive augmentation of the small amount of challenge data provided. The actual deblurring pipeline consists of an approximate inversion of the radial lens distortion (determined by the estimated forward model) and a U-Net architecture, which is trained end-to-end. Our algorithm was the only one passing the hardest challenge level, achieving over 70% character recognition accuracy. Our findings are well in line with the paradigm of data-centric machine learning, and we demonstrate its effectiveness in the context of inverse problems. Apart from a detailed presentation of our methodology, we also analyze the importance of several design choices in a series of ablation studies. The code of our challenge submission is available at https://github.com/theophil-trippe/HDC_TUBerlin_version_1.
    Comment: This article has been published in a revised form in Inverse Problems and Imaging.
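
    As a toy illustration of the on-the-fly synthetic-data stream described above, the sketch below convolves randomly generated "text-like" images with a simple disk (out-of-focus) kernel; the kernel, the image model, and the noise level are assumptions and do not reproduce the estimated forward model or the radial lens distortion of the actual pipeline.

```python
# Hedged sketch of an on-the-fly generator of (ground-truth, blurry) training pairs.
# The disk PSF and the random stroke images are assumptions; the actual submission
# estimates the physical forward model (including lens distortion) from challenge data.
import numpy as np
from scipy.signal import fftconvolve

def disk_kernel(radius: int) -> np.ndarray:
    """Normalized disk-shaped PSF approximating an out-of-focus blur."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (xx**2 + yy**2 <= radius**2).astype(float)
    return kernel / kernel.sum()

def synthetic_pairs(rng, shape=(128, 128), radius=6, noise_std=0.01):
    """Infinite stream of (sharp, blurry) image pairs generated on the fly."""
    psf = disk_kernel(radius)
    while True:
        sharp = (rng.random(shape) > 0.97).astype(float)      # sparse "text-like" strokes
        blurry = fftconvolve(sharp, psf, mode="same")
        blurry += noise_std * rng.standard_normal(shape)       # additive measurement noise
        yield sharp, blurry

rng = np.random.default_rng(0)
sharp, blurry = next(synthetic_pairs(rng))   # one augmentation pair for network training
```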

    Shearlet-based compressed sensing for fast 3D cardiac MR imaging using iterative reweighting

    High-resolution three-dimensional (3D) cardiovascular magnetic resonance (CMR) is a valuable medical imaging technique, but its widespread application in clinical practice is hampered by long acquisition times. Here we present a novel compressed sensing (CS) reconstruction approach using shearlets as a sparsifying transform, allowing for fast 3D CMR (3DShearCS). Shearlets are mathematically optimal for a simplified model of natural images and have been proven to be more efficient than classical systems such as wavelets. Data is acquired with a 3D Radial Phase Encoding (RPE) trajectory, and an iterative reweighting scheme is used during image reconstruction to ensure fast convergence and high image quality. In our in-vivo cardiac MRI experiments, we show that the proposed method, 3DShearCS, has lower relative errors and higher structural similarity than the other reconstruction techniques, especially for high undersampling factors, i.e., short scan times. We further show that 3DShearCS provides an improved depiction of cardiac anatomy (measured by assessing the sharpness of coronary arteries), and two clinical experts qualitatively analyzed the image quality.
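
    A minimal sketch of the iterative reweighting idea used above, with a 1D finite-difference operator standing in for the shearlet transform and a generic Gaussian matrix standing in for the undersampled RPE acquisition; all operators, dimensions, and the cvxpy solver are illustrative assumptions.

```python
# Hedged sketch of iteratively reweighted l1-analysis reconstruction:
# repeatedly solve a weighted l1 problem, then down-weight coefficients that are
# already large so they are penalized less in the next iteration.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 128, 64                                         # toy dimensions (assumed)
Psi = np.diff(np.eye(n), axis=0)                       # stand-in sparsifying transform
x_true = np.repeat(rng.standard_normal(8), n // 8)     # piecewise-constant test signal
A = rng.standard_normal((m, n)) / np.sqrt(m)           # stand-in measurement operator
noise = 0.01 * rng.standard_normal(m)
y = A @ x_true + noise
eta = 1.1 * np.linalg.norm(noise)                      # assumed noise-level bound

weights = np.ones(Psi.shape[0])
eps = 1e-3                                             # stabilizes the weight update
for _ in range(4):                                     # a few reweighting iterations
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm(cp.multiply(weights, Psi @ x), 1)),
               [cp.norm(A @ x - y, 2) <= eta]).solve()
    weights = 1.0 / (np.abs(Psi @ x.value) + eps)      # large coefficients get small weights
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```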

    Shearlet-based regularization in sparse dynamic tomography

    Peer reviewed

    Auxotrophy to Xeno-DNA: an exploration of combinatorial mechanisms for a high-fidelity biosafety system for synthetic biology applications.

    BACKGROUND: Biosafety is a key aspect in the international Genetically Engineered Machine (iGEM) competition, which offers student teams an amazing opportunity to pursue their own research projects in the field of synthetic biology. iGEM projects often involve the creation of genetically engineered bacterial strains. To minimize the risks associated with bacterial release, a variety of biosafety systems were constructed, either to prevent survival of bacteria outside the lab or to hinder horizontal or vertical gene transfer. MAIN BODY: Physical containment methods such as bioreactors or microencapsulation are considered the first safety level. Additionally, various systems involving auxotrophies for both natural and synthetic compounds have been utilized by iGEM teams in recent years. Combinatorial systems comprising multiple auxotrophies have been shown to reduce escape frequencies below the detection limit. Furthermore, a number of natural toxin-antitoxin systems can be deployed to kill cells under certain conditions. Additionally, parts of naturally occurring toxin-antitoxin systems can be used for the construction of 'kill switches' controlled by synthetic regulatory modules, allowing control of cell survival. Kill switches prevent cell survival but do not completely degrade nucleic acids. To avoid horizontal gene transfer, multiple mechanisms to cleave nucleic acids can be employed, resulting in 'self-destruction' of cells. Changes in light or temperature conditions are powerful regulators of gene expression and could serve as triggers for kill switches or self-destruction systems. Xenobiology-based containment uses applications of Xeno-DNA, recoded codons, and non-canonical amino acids to nullify the genetic information of constructed cells for wild-type organisms. A 'minimal genome' approach brings the opportunity to reduce the genome of a cell to only the genes necessary for survival under lab conditions. Such cells are unlikely to survive in the natural environment and are thus considered safe hosts. If suitable for the desired application, a shift to cell-free systems based on Xeno-DNA may represent the ultimate biosafety system. CONCLUSION: Here we describe different containment approaches in synthetic biology, ranging from auxotrophies to minimal genomes, which can be combined to significantly improve reliability. Since the iGEM competition greatly increases the number of people involved in synthetic biology, we will focus especially on biosafety systems developed and applied in the context of the iGEM competition.